Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks
Predicting the number of clock cycles a processor takes to execute a block of
assembly instructions in steady state (the throughput) is important for both
compiler designers and performance engineers. Building an analytical model to
do so is especially complicated in modern x86-64 Complex Instruction Set
Computer (CISC) machines with sophisticated processor microarchitectures: it
is tedious, error-prone, and must be performed from scratch for each
processor generation. In this paper we present Ithemal, the first tool that
learns to predict the throughput of a set of instructions. Ithemal uses a
hierarchical LSTM-based approach to predict throughput based on the opcodes
and operands of instructions in a basic block. We show that Ithemal is more
accurate than state-of-the-art hand-written tools currently used in compiler
backends and static machine code analyzers. In particular, our model has less
than half the error of state-of-the-art analytical models (LLVM's llvm-mca and
Intel's IACA). Ithemal also predicts these throughput values just as fast as
the aforementioned tools, and is easily ported across a variety of processor
microarchitectures with minimal developer effort.
Comment: Published at the 36th International Conference on Machine Learning
(ICML), 2019.
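The hierarchical structure described above (a token-level encoder per instruction feeding an instruction-level encoder over the block, followed by a regression head) can be sketched in pure Python. This is a minimal illustration of the architecture's shape only, not the paper's model: a plain tanh recurrence stands in for the LSTM layers, the weights are random rather than trained, and all names (`SimpleRNN`, `predict_throughput`) are illustrative.

```python
import math
import random

random.seed(0)
DIM = 8  # hidden size, arbitrary for this sketch

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

class SimpleRNN:
    """Plain tanh recurrence standing in for an LSTM layer."""
    def __init__(self, in_dim, hid_dim):
        self.Wx = rand_matrix(hid_dim, in_dim)
        self.Wh = rand_matrix(hid_dim, hid_dim)

    def encode(self, vectors):
        h = [0.0] * len(self.Wh)
        for x in vectors:
            h = [math.tanh(sum(wx * xi for wx, xi in zip(self.Wx[i], x)) +
                           sum(wh * hj for wh, hj in zip(self.Wh[i], h)))
                 for i in range(len(h))]
        return h

def embed(token, dim=DIM):
    # deterministic hash-based embedding, a stand-in for a learned table
    rng = random.Random(sum((i + 1) * ord(c) for i, c in enumerate(token)))
    return [rng.uniform(-1, 1) for _ in range(dim)]

token_rnn = SimpleRNN(DIM, DIM)  # encodes the opcode/operand tokens of one instruction
block_rnn = SimpleRNN(DIM, DIM)  # encodes the sequence of instruction vectors
out_w = [random.uniform(-0.5, 0.5) for _ in range(DIM)]

def predict_throughput(basic_block):
    """basic_block: list of instructions, each a list of tokens, e.g. ['add', 'rax', 'rbx']."""
    instr_vecs = [token_rnn.encode([embed(t) for t in instr]) for instr in basic_block]
    block_vec = block_rnn.encode(instr_vecs)
    return sum(w * h for w, h in zip(out_w, block_vec))  # linear head -> estimated cycles

block = [["add", "rax", "rbx"], ["mov", "rcx", "rax"], ["imul", "rcx", "rcx"]]
print(predict_throughput(block))
```

Because the instruction order flows through the outer recurrence, the sketch is order-sensitive in the same way the paper's model is; in the real system both levels are LSTMs trained end-to-end on measured throughputs.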
CoMEt: x86 Cost Model Explanation Framework
ML-based program cost models have been shown to yield highly accurate
predictions. They have the capability to replace heavily-engineered analytical
program cost models in mainstream compilers, but their black-box nature
discourages their adoption. In this work, we propose the first method for
obtaining faithful and intuitive explanations for the throughput predictions
made by ML-based cost models. We demonstrate our explanations for the
state-of-the-art ML-based cost model, Ithemal. We compare the explanations for
Ithemal with the explanations for a hand-crafted, accurate analytical model,
uiCA. Our empirical findings show that high similarity between explanations for
Ithemal and uiCA usually corresponds to high similarity between their
predictions.
Diálogos para repensar a gestão educativa na América Latina
This century is marked by two key aspects: the generation of knowledge and
collaboration through networks. Regarding the first, organizations such as
universities and other higher-education institutions need to encourage,
manage, and produce knowledge, understood as a continuous process in which
the different units and actors of these institutions take part in research,
whether at the formative level (graduate theses) or in research groups. This
requires that the administrators of the different programs work jointly and
with clear objectives, and that there be regulations that encourage knowledge
production, along with the corresponding incentives.
The second aspect concerns working through networks, which is important and
necessary in these times, because such networks can span organizations both
local and foreign. It matters because it lets the members of an organization
learn alongside others, converse and debate related or shared topics, and
embark on work directed toward a common goal. Clearly, this networked work
humanizes people from different contexts, because they make the same problems
their own, and it also gives them the chance to answer those problems together.
These key aspects, surely together with others, made possible the present
book, "Diálogos para repensar la gestión educativa en Latinoamérica"
(Dialogues for Rethinking Educational Management in Latin America), the first
publication of the Red de Posgrados en Educación en Latinoamérica (REDPEL), a
network formed by graduate programs from six universities in the region.
Before turning to the book, we recount some milestones in the formation of
REDPEL.
In mid-December 2019, Dr. Daniel Johnson (Universidad de Chile) and Dr. Alex
Sánchez (Pontificia Universidad Católica del Perú) met in Lima, during the
first congress of curriculum research groups, where they discussed the
possibility of carrying out joint work among various Latin American programs,
with the aim of sharing what each one does and reflecting together on issues
common to the region. In mid-2020, Dr. Liliana Ávila, Coordinator of the
Maestría en Gestión Educativa, and Dr. Elsa Aponte, Coordinator of the
Maestría en Educación, both of the Universidad Pedagógica y Tecnológica de
Colombia (UPTC), met with Dr. Samuel Mendonça of the Pontificia Universidad
Católica de Campinas, Brazil (PUC Campinas) and with Dr. Daniel Johnson and
Dr. Alex Sánchez to plan the first joint activity: a series of meetings on
management and curriculum at which thesis work in progress from each program
would be presented and commented on by faculty from those same programs. On
October 20 of that year, the Encuentro de Estudios de Posgrado en Currículo
(UPTC, U. Chile, PUCP) was held, and on October 27, the Encuentro de Estudios
de Posgrado en Gestión Educativa (PUC Campinas, UPTC, PUCP). Two students
presented from each program, and twelve faculty members in total took part as
discussants.
These activities brought the different programs closer to working as a team,
highlighted the master's thesis work under way, and allowed faculty and
students to interact in a single virtual "room." This academic space enabled
a closer dialogue among the different organizations; clearly, the virtual
format made the event possible.
Later, a proposal by four PUCP students to hold an academic event run by
students was taken up, with each program represented by one student. The
students were: Barbara Diaz (Universidade Federal de Ouro Preto, Brazil);
Verónica Muñoz (U. Chile); Nidia García (UPTC); Carolina Trentini (PUC
Campinas); Arcelia Diaz (UAZ); and Claudia Achata, Jhennifer Ramírez, Denis
Muñoz, and Tatiana Micalay (PUCP). On October 30, this group held the I
Conferencia Internacional de Estudiantes de Maestría en Educación, on the
theme "La educación en tiempos de incertidumbre: Reflexiones desde el
Currículo y la Gestión Educativa" (Education in Times of Uncertainty:
Reflections from Curriculum and Educational Management). Notably, they were
supported by faculty and research groups from the different programs.
A comprehensive overview of radioguided surgery using gamma detection probe technology
The concept of radioguided surgery, which was first developed some 60 years ago, involves the use of a radiation detection probe system for the intraoperative detection of radionuclides. The use of gamma detection probe technology in radioguided surgery has tremendously expanded and has evolved into what is now considered an established discipline within the practice of surgery, revolutionizing the surgical management of many malignancies, including breast cancer, melanoma, and colorectal cancer, as well as the surgical management of parathyroid disease. The impact of radioguided surgery on the surgical management of cancer patients includes providing vital and real-time information to the surgeon regarding the location and extent of disease, as well as regarding the assessment of surgical resection margins. Additionally, it has allowed the surgeon to minimize the surgical invasiveness of many diagnostic and therapeutic procedures, while still maintaining maximum benefit to the cancer patient. In the current review, we have attempted to comprehensively evaluate the history, technical aspects, and clinical applications of radioguided surgery using gamma detection probe technology.
Artifact for OOPSLA 2023 Paper "Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates of Programs"
Artifact for Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates of Programs
This repository contains the implementation of the Turaco programming language and its analysis, and the experiments in the paper "Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates of Programs".
Programming with neural surrogates of programs
Surrogates, models that mimic the behavior of programs, form the basis of a
variety of development workflows. We study three surrogate-based design
patterns, evaluating each in case studies on a large-scale CPU simulator.
With surrogate compilation, programmers develop a surrogate that mimics the
behavior of a program to deploy to end-users in place of the original program.
Surrogate compilation accelerates the CPU simulator under study by .
With surrogate adaptation, programmers develop a surrogate of a program then
retrain that surrogate on a different task. Surrogate adaptation decreases the
simulator's error by up to . With surrogate optimization, programmers
develop a surrogate of a program, optimize input parameters of the surrogate,
then plug the optimized input parameters back into the original program.
Surrogate optimization finds simulation parameters that decrease the
simulator's error by compared to the error induced by expert-set
parameters.
In this paper we formalize this taxonomy of surrogate-based design patterns.
We further describe the programming methodology common to all three design
patterns. Our work builds a foundation for the emerging class of workflows
based on programming with surrogates of programs.
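The surrogate optimization pattern above (train a cheap model of a program, optimize inputs against the model, then plug the optimized inputs back into the real program) can be illustrated with a small self-contained sketch. This is not the paper's CPU-simulator setup: `expensive_program` is a hypothetical stand-in for a costly program, and the surrogate here is simple piecewise-linear interpolation rather than a trained neural network.

```python
def expensive_program(x):
    # stand-in for a costly program; its true minimum is at x = 3
    return (x - 3.0) ** 2 + 1.0

# 1. sample the program's behavior on a coarse grid of inputs
xs = [i * 0.5 for i in range(13)]            # inputs 0.0 .. 6.0
pts = [(x, expensive_program(x)) for x in xs]

# 2. a cheap surrogate of the program: piecewise-linear interpolation
def surrogate(x, pts):
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside sampled range")

# 3. optimize the input against the surrogate (cheap to query densely)
candidates = [j * 0.01 for j in range(600)]
best_x = min(candidates, key=lambda x: surrogate(x, pts))

# 4. plug the optimized input back into the original program
print(round(best_x, 2), round(expensive_program(best_x), 6))  # → 3.0 1.0
```

The point of the pattern is step 4: the surrogate is only a guide for the search, and the final quality is always measured on the original program, which keeps surrogate error from silently corrupting the result.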
Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates of Programs
Programmers and researchers are increasingly developing surrogates of programs, models of a subset of the observable behavior of a given program, to solve a variety of software development challenges. Programmers train surrogates from measurements of the behavior of a program on a dataset of input examples. A key challenge of surrogate construction is determining what training data to use to train a surrogate of a given program.
We present a methodology for sampling datasets to train neural-network-based surrogates of programs. We first characterize the proportion of data to sample from each region of a program's input space (corresponding to different execution paths of the program) based on the complexity of learning a surrogate of the corresponding execution path. We next provide a program analysis to determine the complexity of different paths in a program. We evaluate this methodology on a range of real-world programs, demonstrating that complexity-guided sampling results in empirical improvements in accuracy.
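The core budgeting idea (sample more training data from execution paths that are harder to learn) can be sketched as a simple proportional allocation. This is a loose illustration, not the paper's method: Turaco derives per-path complexities via program analysis and has its own allocation rule, whereas here the complexity scores and path names are assumed inputs and the split is plain largest-remainder proportionality.

```python
def allocate_samples(budget, complexities):
    """Split a total sampling budget across program paths in proportion to
    each path's (assumed-given) learning complexity."""
    total = sum(complexities.values())
    raw = {p: budget * c / total for p, c in complexities.items()}
    counts = {p: int(n) for p, n in raw.items()}  # floor each share
    # hand leftover samples to the largest fractional remainders
    leftover = budget - sum(counts.values())
    for p in sorted(raw, key=lambda p: raw[p] - counts[p], reverse=True)[:leftover]:
        counts[p] += 1
    return counts

# hypothetical paths: the 'slow' path is hardest to learn, so it gets the most data
paths = {"fast_path": 1.0, "slow_path": 3.0, "error_path": 0.5}
print(allocate_samples(1000, paths))
# → {'fast_path': 222, 'slow_path': 667, 'error_path': 111}
```

The largest-remainder step just guarantees the integer counts sum exactly to the budget; the substantive choice is that allocation tracks per-path complexity rather than, say, per-path execution frequency alone.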